Integrate flashinfer mm_mxfp8 in ModelOpt MXFP8 #35053
mgoin merged 2 commits into vllm-project:main
Conversation
Code Review
The pull request integrates the FlashInfer mm_mxfp8 GEMM into vLLM for ModelOpt MXFP8 quantization. The implementation includes the necessary swizzling logic for weight scales and dynamic quantization for inputs. However, two critical issues were identified: incorrect keyword arguments in the FlashInfer wrapper will cause runtime errors, and hard assertions on minimum layer dimensions will crash models that contain small layers. A fallback to the emulation backend should be implemented for unsupported shapes.
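For context, MXFP8 stores FP8 (e4m3) values in blocks of 32 elements that share one power-of-two (E8M0) scale. Below is a minimal emulation-style sketch of what the dynamic input quantization mentioned above amounts to; the helper name and the exact scale-selection/rounding rule are assumptions for illustration, not the kernel's implementation.

```python
import torch

MXFP8_BLOCK_SIZE = 32   # OCP MX format: 32 elements share one E8M0 scale
FP8_E4M3_MAX_EXP = 8    # emax of float8_e4m3 (max normal value 448 = 1.75 * 2**8)

def mxfp8_quantize_reference(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Emulation-style MXFP8 quantization along the last dimension."""
    assert x.shape[-1] % MXFP8_BLOCK_SIZE == 0
    blocks = x.reshape(-1, MXFP8_BLOCK_SIZE).float()
    # Shared per-block exponent per the OCP MX spec:
    # shared_exp = floor(log2(max|x|)) - emax(element format).
    amax = blocks.abs().amax(dim=-1, keepdim=True).clamp_min(torch.finfo(torch.float32).tiny)
    scale = torch.exp2(torch.floor(torch.log2(amax)) - FP8_E4M3_MAX_EXP)
    # Scale, saturate to the e4m3 range, and cast element values to FP8.
    q = (blocks / scale).clamp(-448.0, 448.0).to(torch.float8_e4m3fn)
    return q.reshape(x.shape), scale.reshape(*x.shape[:-1], -1)
```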
assert min_dim <= K, (
    f"mm_mxfp8 requires K >= {min_dim}, got K={K}. "
    f"in_features is too small for mm_mxfp8."
)
assert K % MXFP8_BLOCK_SIZE == 0, (
    f"mm_mxfp8 requires K to be divisible by {MXFP8_BLOCK_SIZE}, got K={K}."
)
assert min_dim <= N, (
    f"mm_mxfp8 requires N >= {min_dim}, got N={N}. "
    f"out_features is too small for mm_mxfp8."
)
These hard assertions on min_dim=128 for K and N will cause vLLM to crash if the model contains any linear layers with dimensions smaller than 128 (e.g., router gates or small projection layers). Instead of crashing, the implementation should detect unsupported shapes and fall back to the EMULATION backend for those specific layers. Note that this requires ensuring the weight scales are processed correctly (not swizzled) for the fallback backend during the weight loading phase.
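A minimal sketch of that suggestion, assuming a hypothetical is_mm_mxfp8_supported helper and MIN_MM_MXFP8_DIM constant (neither is from this PR):

```python
MXFP8_BLOCK_SIZE = 32
MIN_MM_MXFP8_DIM = 128  # assumed minimum K/N accepted by the CUTLASS kernel

def is_mm_mxfp8_supported(N: int, K: int) -> bool:
    """True if the layer shape can use the FlashInfer CUTLASS mm_mxfp8 path."""
    return (
        K >= MIN_MM_MXFP8_DIM
        and N >= MIN_MM_MXFP8_DIM
        and K % MXFP8_BLOCK_SIZE == 0
    )
```

The check would need to run at weight-processing time so that layers routed to the emulation backend keep unswizzled weight scales.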
I prefer not to fall back to emulation and instead raise an error.
Emulation has lower performance than the CUTLASS path, and users may not notice that the fallback was triggered.
The ModelOpt MXFP8 support is new; changes to the backend-selection logic can be added later as needed.
Assertions can be raised before kernel execution, for example during post-weight-loading processing, if we detect that the model has shapes incompatible with the chosen kernel backend. If the kernel's apply is the only point of failure / check, the error would surface much later, only when the kernel is actually invoked.
If @mgoin merges his PR #34664 (Marlin MXFP8 GEMM) first, I will align my PR with his.
In that case I'll add a select_mxfp8_linear_backend function that selects the CUTLASS / Marlin / emulation backend (falling back to Marlin if CUTLASS is not supported).
Maybe an assert should be used only when the user forces the CUTLASS MXFP8 GEMM via an env var (or we follow logic similar to the existing NVFP4 / FP8 select_*_backend helpers).
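A rough sketch of that selection helper, reusing the is_mm_mxfp8_supported shape check sketched above; the enum, the helper name, and the capability flags are illustrative, not code from this PR or #34664:

```python
from enum import Enum

class Mxfp8LinearBackend(Enum):
    CUTLASS = "cutlass"      # FlashInfer mm_mxfp8
    MARLIN = "marlin"        # Marlin MXFP8 GEMM (PR #34664)
    EMULATION = "emulation"  # dequantize + high-precision GEMM

def select_mxfp8_linear_backend(
    N: int,
    K: int,
    cutlass_available: bool,
    marlin_available: bool,
) -> Mxfp8LinearBackend:
    # Prefer the CUTLASS kernel when the platform and layer shape allow it,
    # otherwise fall back to Marlin, then to emulation.
    if cutlass_available and is_mm_mxfp8_supported(N, K):
        return Mxfp8LinearBackend.CUTLASS
    if marlin_available:
        return Mxfp8LinearBackend.MARLIN
    return Mxfp8LinearBackend.EMULATION
```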
Purpose
Follow-up to PR:
#33786
The FlashInfer version in vLLM was recently updated (to v0.6.4).
A new MXFP8 GEMM (CUTLASS) is available: mm_mxfp8 (flashinfer-ai/flashinfer#2464).
This PR integrates that GEMM into vLLM for ModelOpt MXFP8.
Test Plan
Use the following model for testing (used in other related PRs):
https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B
Compare performance (tok/sec) and lm_eval results between the original BF16 model and the MXFP8 model.
Test Result
Eval / accuracy
Command:
Results (GPU B200):
Performance benchmark
Command:
Results (GPU B200):